
    Statistical constraints on the IR galaxy number counts and cosmic IR background from the Spitzer GOODS survey

    We perform fluctuation analyses on the data from the Spitzer GOODS survey (epoch one) in the Hubble Deep Field North (HDF-N). We fit a parameterised power-law number count model of the form dN/dS = N_0 S^{-\delta} to data from each of the four Spitzer IRAC bands, using Markov Chain Monte Carlo (MCMC) sampling to explore the posterior probability distribution in each case. We obtain best-fit reduced chi-squared values of (3.43, 0.86, 1.14, 1.13) in the four IRAC bands. From this analysis we determine the likely differential faint source counts down to 10^{-8} Jy, over two orders of magnitude fainter in flux than has previously been determined. From these constrained number count models, we estimate a lower bound on the contribution to the infrared (IR) background light arising from faint galaxies. We estimate the total integrated background IR light in the Spitzer GOODS HDF-N field due to faint sources. By adding the estimates of integrated light given by Fazio et al. (2004), we calculate the total integrated background light in the four IRAC bands. We compare our 3.6 micron results with previous background estimates in similar bands and conclude that, subject to our assumptions about the noise characteristics, our analyses are able to account for the vast majority of the 3.6 micron background. Our analyses are sensitive to a number of potential systematic effects; we discuss our assumptions regarding noise characteristics, flux calibration and flat-fielding artifacts.
    Comment: 10 pages; 29 figures (Figure added); correction made to flux scale of Fazio points in Figure
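
    As a rough illustration of the number-count model described above (a sketch, not the paper's actual pipeline), the Python snippet below evaluates dN/dS = N_0 S^(-delta) and integrates S * dN/dS over a faint flux range to estimate the background light contributed by sources in that range; the parameter values, flux limits and the simple trapezoidal integration are placeholders chosen for illustration.

    import numpy as np

    def differential_counts(S, N0, delta):
        """Power-law differential number counts dN/dS = N0 * S**(-delta)."""
        return N0 * S**(-delta)

    def background_contribution(N0, delta, S_min, S_max, n=10000):
        """Integrate S * dN/dS over [S_min, S_max]: the integrated light
        (per unit solid angle, in whatever units N0 is normalised to)
        contributed by sources in that flux range."""
        # Integrate on a logarithmic grid, since the counts span many decades in flux.
        S = np.logspace(np.log10(S_min), np.log10(S_max), n)
        return np.trapz(S * differential_counts(S, N0, delta), S)

    # Purely illustrative parameters -- not the fitted GOODS values.
    N0, delta = 1.0e3, 1.8
    faint_light = background_contribution(N0, delta, 1e-8, 1e-6)  # 10^-8 to 10^-6 Jy
    print(f"Illustrative faint-source contribution: {faint_light:.3e}")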

    Bayesian methods of astronomical source extraction

    We present two new source extraction methods, based on Bayesian model selection and using the Bayesian Information Criterion (BIC). The first is a source detection filter, able to simultaneously detect point sources and estimate the image background. The second is an advanced photometry technique, which measures the flux, position (to sub-pixel accuracy), local background and point spread function. We apply the source detection filter to simulated Herschel-SPIRE data and show the filter's ability to both detect point sources and simultaneously estimate the image background. We use the photometry method to analyse a simple simulated image containing a source of unknown flux, position and point spread function; we not only accurately measure these parameters, but also determine their uncertainties (using Markov Chain Monte Carlo sampling). The method also characterises the nature of the source (distinguishing between a point source and an extended source). We demonstrate the effect of including additional prior knowledge. Prior knowledge of the point spread function increases the precision of the flux measurement, while prior knowledge of the background has only a small impact. In the presence of higher noise levels, we show that prior positional knowledge (such as might arise from a strong detection in another waveband) allows us to accurately measure the source flux even when the source is too faint to be detected directly. These methods are incorporated in SUSSEXtractor, the source extraction pipeline for the forthcoming Akari FIS far-infrared all-sky survey. They are also implemented in a stand-alone, beta-version public tool that can be obtained at http://astronomy.sussex.ac.uk/~rss23/sourceMiner_v0.1.2.0.tar.gz
    Comment: Accepted for publication by ApJ (this version compiled using emulateapj.cls)
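
    To make the model-selection idea concrete, here is a deliberately simplified sketch (not SUSSEXtractor itself): it compares a background-only model against a background-plus-Gaussian-point-source model on a small image patch using the BIC. The fixed PSF width, the least-squares fit and the function names are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def psf_model(coords, flux, x0, y0, bkg, sigma=1.5):
        """Constant background plus a circular Gaussian PSF of assumed width."""
        x, y = coords
        return bkg + flux * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

    def bic(residuals, n_params, n_data, noise_sigma):
        """BIC = -2 ln(likelihood) + k ln(n) for Gaussian noise of known sigma
        (constant terms cancel when comparing models on the same data)."""
        chi2 = np.sum(residuals**2) / noise_sigma**2
        return chi2 + n_params * np.log(n_data)

    def classify_patch(patch, noise_sigma):
        """Return 'source' if the point-source model is preferred by the BIC."""
        ny, nx = patch.shape
        y, x = np.mgrid[0:ny, 0:nx]
        coords = (x.ravel(), y.ravel())
        data = patch.ravel()

        # Model 0: constant background only (one free parameter).
        bic_bkg = bic(data - data.mean(), 1, data.size, noise_sigma)

        # Model 1: background plus point source (four free parameters here).
        p0 = [data.max() - data.mean(), nx / 2, ny / 2, data.mean()]
        popt, _ = curve_fit(psf_model, coords, data, p0=p0)
        bic_src = bic(data - psf_model(coords, *popt), 4, data.size, noise_sigma)

        return "source" if bic_src < bic_bkg else "background"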

    Predicting chemoinsensitivity in breast cancer with ’omics/digital pathology data fusion

    Predicting response to treatment and disease-specific death are key tasks in cancer research, yet there is a lack of methodologies to achieve them. Large-scale ’omics and digital pathology technologies have led to the need for effective statistical methods for data fusion to extract the most useful patterns from these diverse data types. We present FusionGP, a method for combining heterogeneous data types designed specifically for predicting outcome of treatment and disease. FusionGP is a Gaussian process model that includes a generalization of feature selection for biomarker discovery, allowing for simultaneous, sparse feature selection across multiple data types. Importantly, it can accommodate highly nonlinear structure in the data, and automatically infers the optimal contribution from each input data type. FusionGP compares favourably to several popular classification methods, including the Random Forest classifier, a stepwise logistic regression model and the Support Vector Machine, on single data types. By combining gene expression, copy number alteration and digital pathology image data in 119 estrogen receptor (ER)-negative and 345 ER-positive breast tumours, we aim to predict two important clinical outcomes: death and chemoinsensitivity. While gene expression data give the best predictive performance in the majority of cases, the digital pathology data are much better for predicting death in ER cases. Thus, FusionGP is a new tool for selecting informative features from heterogeneous data types and predicting treatment response and prognosis.
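
    The data-fusion idea can be sketched with off-the-shelf tools, although this is not the FusionGP model itself: the snippet below standardises each data type, concatenates them, and fits a scikit-learn Gaussian process classifier with one RBF length-scale per feature, so that short fitted length-scales loosely indicate which features the model relies on. The function name and this ARD shortcut for feature relevance are illustrative assumptions; FusionGP performs sparse feature selection within its own model.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.preprocessing import StandardScaler

    def fit_fused_gp(expression, copy_number, pathology, labels):
        """Fit a GP classifier on standardised, concatenated data types.

        Each data-type argument is an (n_samples, n_features) array; labels is
        a binary outcome vector (e.g. chemoinsensitive vs. sensitive)."""
        blocks = [StandardScaler().fit_transform(block)
                  for block in (expression, copy_number, pathology)]
        X = np.hstack(blocks)
        # Anisotropic RBF: one length-scale per feature (automatic relevance
        # determination); irrelevant features drift to long length-scales.
        kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1]))
        return GaussianProcessClassifier(kernel=kernel).fit(X, labels)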

    Information Processing: Coordination and Control in Large Hotels

    A number of factors influence the information processing needs of organizations, particularly with respect to the coordination and control mechanisms within a hotel. The authors use a theoretical framework to illustrate alternative mechanisms that can be used to coordinate and control hotel operations.

    Understanding the dynamics of segregation bands of simulated granular material in a rotating drum

    Axial segregation of a binary mixture of grains in a rotating drum is studied using Molecular Dynamics (MD) simulations. A force scheme leading to a constant restitution coefficient is used, and shows that axial segregation is possible between two species of grains made of identical material that differ only in size. The oscillatory motion of bands is investigated and the influence of the frictional properties is elucidated. The mechanism of band merging is explained using direct imaging of individual grains.
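
    One standard force scheme with this property (whether it is the exact scheme used in the paper is not stated in the abstract) is the linear spring-dashpot contact, whose restitution coefficient is independent of impact velocity. The sketch below computes the dashpot coefficient required for a target restitution and evaluates the resulting normal contact force; the function names and the non-attraction clamp are illustrative choices.

    import numpy as np

    def damping_for_restitution(e, k, m_eff):
        """Dashpot coefficient for a linear spring-dashpot contact giving a
        velocity-independent restitution coefficient e (0 < e < 1).

        Follows from e = exp(-pi*gamma/omega), with gamma = c/(2*m_eff) and
        omega = sqrt(k/m_eff - gamma**2)."""
        log_e = np.log(e)
        return -2.0 * log_e * np.sqrt(m_eff * k / (np.pi**2 + log_e**2))

    def normal_contact_force(overlap, overlap_rate, k, c):
        """Repulsive spring plus dashpot (overlap_rate > 0 during approach),
        clamped at zero so the contact never becomes attractive."""
        return max(0.0, k * overlap + c * overlap_rate)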

    Accelerating Bayesian hierarchical clustering of time series data with a randomised algorithm

    We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which can be downloaded from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/
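
    The randomisation strategy itself is specific to BHC and is described in the paper rather than here; purely as a generic illustration of how subsampling can accelerate hierarchical clustering of time series, the sketch below clusters a random subsample hierarchically and assigns every series to the nearest subsample cluster centroid. This is not the paper's algorithm, and the names and parameters are placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import cdist

    def subsample_hierarchical_clusters(X, n_clusters, subsample=200, seed=0):
        """Cluster a random subsample hierarchically, then assign every series
        to the nearest subsample cluster centroid.

        X is an (n_series, n_timepoints) array; this is a generic
        subsample-then-assign illustration, not randomised BHC."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=min(subsample, len(X)), replace=False)
        Z = linkage(X[idx], method="average")
        sub_labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        centroids = np.array([X[idx][sub_labels == c].mean(axis=0)
                              for c in np.unique(sub_labels)])
        return cdist(X, centroids).argmin(axis=1)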

    Homogenising the upper continental crust : the Si isotope evolution of the crust recorded by ancient glacial diamictites

    This work was supported by PhD funding to MM by the University of St Andrews School of Earth and Environmental Sciences and the Handsel scheme, as well as by NERC grant NE/R002134/1 to PS and NSF grant EAR-1321954 to RR and RG.
    Twenty-four composite samples of the fine-grained matrix of glacial diamictites deposited from the Mesoarchaean to the Palaeozoic have been analysed for their silicon isotope composition and used to establish, for the first time, the long-term secular Si isotope record of the compositional evolution of the upper continental crust (UCC). Diamictites with Archaean and Palaeoproterozoic Nd model ages show greater silicon isotope heterogeneity than those with younger model ages (irrespective of depositional age). We attribute the anomalously light Si isotope compositions of some diamictites with Archaean model ages to the presence of glacially milled banded iron formation (BIF), substantiated by the high iron content and Ge/Si in these samples. We infer that the relatively heavy Si isotope signatures in some Palaeoproterozoic diamictites (all of which have Archaean Nd model ages) are due to a contribution from tonalite-trondhjemite-granodiorites (TTGs), evidenced by the abundance of TTG clasts. By the Neoproterozoic (with model ages ranging from 2.3 to 1.8 Ga), diamictite Si isotope compositions exhibit a range comparable to modern UCC. This reduced variability through time is interpreted as reflecting the decreasing importance of BIF and TTG in post-Archaean continental crust. The secular evolution of Si isotopes in the diamictites offers an independent test of models for the emergence of stable cratons and the onset of horizontal mobile-lid tectonism. The early Archaean UCC was heterogeneous and incorporated significant amounts of isotopically light BIF, but following the late Archaean stabilisation of cratons, coupled with the oxygenation of the atmosphere that led to the reduced neoformation of BIF and diminishing quantities of TTGs, the UCC became increasingly homogeneous. This homogenisation likely occurred via reworking of pre-existing crust, as evidenced by Archaean Nd model ages recorded in younger diamictites.